Deep learning algorithms that rely on extensive training data are revolutionizing image recovery from ill-posed measurements. Training data is scarce in many imaging applications, including ultra-high-resolution imaging. The deep image prior (DIP) algorithm was introduced for single-shot image recovery, completely eliminating the need for training data. A challenge with this scheme is the need for early stopping to minimize the overfitting of the CNN parameters to the noise in the measurements. We introduce a generalized Stein's unbiased risk estimate (GSURE) loss metric to minimize overfitting. Our experiments show that the proposed metric minimizes the overfitting problem, thereby offering significantly improved performance over classical DIP schemes. We also use the GSURE-DIP approach with model-based unrolled architectures, which offers improved performance over direct-inversion schemes.
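The SURE family of losses underlying this approach estimates the mean-square error without access to the clean image. A minimal sketch for the Gaussian-denoising case (plain SURE, not the paper's full GSURE for general forward operators) uses the standard Monte Carlo divergence trick; the denoiser and data here are illustrative:

```python
import numpy as np

def mc_divergence(f, y, eps=1e-3, rng=None):
    """Monte Carlo estimate of the divergence of denoiser f at y:
    div f(y) ~= b^T (f(y + eps*b) - f(y)) / eps, b a random sign vector."""
    rng = rng or np.random.default_rng(0)
    b = rng.choice([-1.0, 1.0], size=y.shape)
    return float(b.ravel() @ (f(y + eps * b) - f(y)).ravel()) / eps

def sure_loss(f, y, sigma):
    """SURE: an unbiased estimate of E||f(y) - x||^2 / n for noise level
    sigma, computed without the clean image x."""
    n = y.size
    residual = np.sum((f(y) - y) ** 2) / n
    return residual - sigma**2 + (2 * sigma**2 / n) * mc_divergence(f, y)

# Toy check: for the identity "denoiser", div f(y) = n exactly, so
# SURE reduces to sigma^2 -- the MSE of doing no denoising at all.
identity = lambda y: y
y = np.random.default_rng(1).normal(size=64)
print(round(sure_loss(identity, y, sigma=0.1), 6))  # → 0.01
```

Minimizing such a loss over the CNN parameters, rather than the data-fidelity term alone, is what removes the need for early stopping.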
Image reconstruction using deep learning algorithms offers improved reconstruction quality and lower reconstruction time than classical compressed sensing and model-based algorithms. Unfortunately, clean and fully sampled ground-truth data to train the deep networks is often unavailable in several applications, restricting the applicability of the above methods. We introduce a novel metric termed the ENsemble Stein's Unbiased Risk Estimate (ENSURE) framework, which can be used to train deep image reconstruction algorithms without fully sampled and noise-free images. The proposed framework is a generalization of the classical SURE and GSURE formulations to the setting where the images are sampled by different measurement operators, chosen randomly from a set. We evaluate the expectation of the GSURE loss functions over the sampling patterns to obtain the ENSURE loss function. We show that this loss is an unbiased estimate of the true mean-square error, making it a better alternative to GSURE, which is unbiased only for the projected error. Our experiments show that networks trained with this loss function can offer reconstructions comparable to the supervised setting. While we demonstrate this framework in the context of MR image recovery, the ENSURE framework is generally applicable to arbitrary inverse problems.
Modeling what makes an advertisement persuasive, i.e., eliciting the desired response from a consumer, is critical to the study of propaganda, social psychology, and marketing. Despite its importance, computational modeling of persuasion in computer vision is still in its infancy, primarily due to the lack of benchmark datasets that can provide persuasion-strategy labels associated with ads. Motivated by the persuasion literature in social psychology and marketing, we introduce an extensive vocabulary of persuasion strategies and build the first ad image corpus annotated with persuasion strategies. We then formulate the task of persuasion strategy prediction with multi-modal learning, where we design a multi-task attention fusion model that can leverage other ad-understanding tasks to predict persuasion strategies. Additionally, we perform a real-world case study on 1600 advertising campaigns of 30 Fortune 500 companies, where we use our model's predictions to analyze which strategies work with different demographics (age and gender). The dataset also provides image segmentation masks, which label the persuasion strategies in the corresponding ad images on the test split. We publicly release our code and dataset at https://midas-research.github.io/persuasion-avertisements/.
Fasteners play a critical role in securing the various parts of machinery. Deformations such as dents, cracks, and scratches on the surface of fasteners are caused by material properties and incorrect handling of equipment during the production process. As a result, quality control is required to ensure safe and reliable operation. Existing defect inspection methods rely on manual examination, which consumes a large amount of time, money, and other resources; likewise, accuracy cannot be guaranteed due to human error. Automatic defect detection systems have proven to be impactful over manual inspection techniques for defect analysis. However, computational techniques such as convolutional neural networks (CNNs) and deep-learning-based approaches are evolving methods, and the full potential of a CNN can be realized only by carefully selecting its design parameter values. In this study, an attempt has been made to develop a robust automatic system using Taguchi-based design of experiments and analysis. The dataset used to train the system was created manually for M14-size nuts with two labeled classes: defective and non-defective. There are a total of 264 images in the dataset. The proposed sequential CNN achieves a validation accuracy of 96.3% with a validation loss of 0.277 at a learning rate of 0.001.
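Taguchi-based design of experiments screens design-parameter combinations with an orthogonal array instead of a full factorial sweep. A minimal sketch with an L4(2^3) array (the factor names and levels here are illustrative, not the study's actual factors):

```python
# L4(2^3) orthogonal array: 4 runs cover 3 two-level factors such that
# every pair of factors sees all four level combinations exactly once.
L4 = [
    (0, 0, 0),
    (0, 1, 1),
    (1, 0, 1),
    (1, 1, 0),
]

# Hypothetical CNN design parameters to screen (assumed levels).
factors = {
    "learning_rate": [1e-3, 1e-4],
    "batch_size": [16, 32],
    "kernel_size": [3, 5],
}

def runs(array, factors):
    """Expand an orthogonal array into concrete training configurations."""
    names = list(factors)
    return [{n: factors[n][lvl] for n, lvl in zip(names, row)} for row in array]

for cfg in runs(L4, factors):
    print(cfg)  # each dict is one training run to evaluate
```

Four runs replace the eight of a full 2^3 factorial while still allowing main-effect analysis of each factor.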
We consider the influence maximization (IM) problem: 'if we can convince a subset of individuals in a social network to adopt a new product or innovation, with the goal of triggering a large cascade of further adoptions, whom should we target?' Formally, this is the task of selecting $k$ seed nodes in a social network such that the expected number of influenced nodes in the network (under some influence propagation model) is maximized. This problem has been widely studied in the literature, and several solution approaches have been proposed. However, most simulation-based approaches involve time-consuming Monte Carlo simulations to compute the influence of the seed nodes on the entire network. This limits the applicability of these methods on large social networks. In this paper, we are interested in solving the influence maximization problem in a time-efficient manner. We propose a community-aware divide-and-conquer strategy that involves (i) learning the inherent community structure of the social network, (ii) generating candidate solutions by solving the influence maximization problem for each community, and (iii) selecting the final set of influential individuals from the candidate solutions using a novel progressive budgeting scheme. We provide experiments on real-world social networks showing that the proposed algorithm outperforms the simulation-based algorithms in terms of empirical run time and the heuristic algorithms in terms of influence. We also study the effect of community structure on the performance of the algorithm. Our experiments show that community structures with higher modularity lead the proposed algorithm to perform better in terms of both run time and influence.
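The divide-and-conquer idea above can be sketched on a toy graph. This is a minimal illustration only: a degree heuristic stands in for the per-community IM solver, and the size-proportional budget rule is an assumption, not the paper's progressive budgeting scheme:

```python
# Community-aware seed selection sketch (illustrative, not the paper's
# algorithm): solve a cheap IM proxy inside each community, then merge.

def top_seeds(adj, nodes, k):
    """Candidate seeds inside one community: the k highest-degree nodes
    (a stand-in for Monte Carlo influence estimation)."""
    return sorted(nodes, key=lambda v: len(adj[v]), reverse=True)[:k]

def community_im(adj, communities, k):
    candidates = []
    for comm in communities:
        # (ii) per-community sub-problem, budget proportional to size
        budget = max(1, round(k * len(comm) / len(adj)))
        candidates += top_seeds(adj, comm, budget)
    # (iii) keep the k globally best candidates
    return sorted(candidates, key=lambda v: len(adj[v]), reverse=True)[:k]

# Two obvious communities: {0,1,2,3} and {4,5,6}, bridged by edge 3-4.
adj = {
    0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4],
    4: [3, 5, 6], 5: [4, 6], 6: [4, 5],
}
print(community_im(adj, [[0, 1, 2, 3], [4, 5, 6]], k=2))  # → [0, 4]
```

Because each community is small, the expensive influence estimation runs on subgraphs rather than the whole network, which is the source of the run-time savings.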
One of the most pressing societal issues is the fight against false news. Deceptive claims are difficult to expose and cause a great deal of damage. To tackle this problem, fact verification becomes crucial and is thus a topic of interest in different research communities. Using only the textual form of the data, we propose our solution to the problem and achieve competitive results against other approaches. We present solutions based on two approaches: fine-tuning pretrained language models (PLMs) and prompt-based learning. The PLM-based approach uses traditional supervised learning, where the model is trained to take 'x' as input and output a prediction as P(y|x). In contrast, prompt-based learning reflects the idea of designing the input to fit the model, so that the original objective can be re-formulated as a (masked) language modeling problem. We can further stimulate the rich knowledge in the PLM by employing additional prompts to fine-tune it for better performance on downstream tasks. Our experiments show that the proposed method performs better than simply fine-tuning the PLM. We obtained an F1 score of 0.6946 on the Trancify dataset and seventh place on the competition leaderboard.
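The re-formulation from P(y|x) classification to masked language modeling can be made concrete with a cloze template and a verbalizer. A minimal sketch (the template and label words here are illustrative, not the ones used in the paper):

```python
# Prompt-based recasting of claim verification: instead of training a
# classifier head for P(y|x), wrap the claim in a cloze template and let
# the PLM predict a label word at the [MASK] position.

TEMPLATE = "{claim} The claim above is [MASK]."
VERBALIZER = {"true": "SUPPORTED", "false": "REFUTED"}  # mask word -> label

def to_prompt(claim):
    """Wrap a raw claim x into a masked-LM input."""
    return TEMPLATE.format(claim=claim)

def to_label(mask_word):
    """Map the PLM's predicted mask word back to a task label."""
    return VERBALIZER[mask_word]

prompt = to_prompt("The Eiffel Tower is in Berlin.")
print(prompt)  # → The Eiffel Tower is in Berlin. The claim above is [MASK].
```

Because the task now looks like the PLM's pretraining objective, the model's existing knowledge can be exploited with little or no additional supervision.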
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the usage of such a robot in a domestic setting is still largely a research topic. This paper discusses the understanding and virtual simulation of such a robot capable of detecting and understanding human emotions, generating its gait, and responding via sounds and expressions on a screen. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate through various terrains, detect sound sources, and respond to emotions using audio-visual feedback. This paper aims to establish the framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli using motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, still managing an accuracy of 63.5%. The video emotion detection system produced results that are almost at par with the state of the art, with an accuracy of 99.66%. Due to its "on-policy" learning process, the PPO algorithm was extremely rapid to learn, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
Searching long egocentric videos with natural language queries (NLQ) has compelling applications in augmented reality and robotics, where a fluid index into everything that a person (agent) has seen before could augment human memory and surface relevant information on demand. However, the structured nature of the learning problem (free-form text query inputs, localized video temporal window outputs) and its needle-in-a-haystack nature makes it both technically challenging and expensive to supervise. We introduce Narrations-as-Queries (NaQ), a data augmentation strategy that transforms standard video-text narrations into training data for a video query localization model. Validating our idea on the Ego4D benchmark, we find it has tremendous impact in practice. NaQ improves multiple top models by substantial margins (even doubling their accuracy), and yields the very best results to date on the Ego4D NLQ challenge, soundly outperforming all challenge winners in the CVPR and ECCV 2022 competitions and topping the current public leaderboard. Beyond achieving the state-of-the-art for NLQ, we also demonstrate unique properties of our approach such as gains on long-tail object queries, and the ability to perform zero-shot and few-shot NLQ.
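The core of NaQ is a data transformation: each timestamped narration becomes a (query, temporal window) training pair for the localization model. A minimal sketch of that conversion (the field names and the fixed-window rule are assumptions for illustration, not Ego4D's actual annotation format):

```python
# Narrations-as-Queries sketch: turn free-form 'at time t, C does X'
# narrations into NLQ-style supervision, where the narration text is the
# query and a small window around its timestamp is the temporal answer.

def narrations_to_queries(narrations, half_window=2.0):
    samples = []
    for t, text in narrations:
        samples.append({
            "query": text,
            "start": max(0.0, t - half_window),  # clamp at video start
            "end": t + half_window,
        })
    return samples

narrations = [(12.0, "C picks up the kettle"), (40.5, "C opens the drawer")]
for s in narrations_to_queries(narrations):
    print(s)
```

Because narrations are plentiful and cheap compared to NLQ annotations, this transformation multiplies the available training data for the needle-in-a-haystack localization task.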
Machine Translation (MT) systems generally aim at the automatic representation of a source language in a target language, retaining the originality of context, using various Natural Language Processing (NLP) techniques. Among various NLP methods, Statistical Machine Translation (SMT) uses probabilistic and statistical techniques to analyze information and perform the conversion. This paper canvasses the development of bilingual SMT models for translating English to fifteen low-resource Indian Languages (ILs) and vice versa. At the outset, all 15 languages are briefed with a short description related to our experimental need. Further, a detailed analysis of the Samanantar and OPUS datasets for model building, along with the standard benchmark dataset (Flores-200) for fine-tuning and testing, is done as a part of our experiment. Different preprocessing approaches are proposed in this paper to handle the noise of the dataset. To create the system, the MOSES open-source SMT toolkit is explored. Distance reordering is utilized with the aim of understanding the rules of grammar and context-dependent adjustments through a phrase reordering categorization framework. In our experiment, the quality of the translation is evaluated using standard metrics such as BLEU, METEOR, and RIBES.
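Of the metrics named above, BLEU is the most common and is easy to sketch. A minimal sentence-level implementation with uniform weights over n-grams up to 4 and the standard brevity penalty (real evaluations would use a toolkit implementation with smoothing and corpus-level aggregation):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions,
    scaled by a brevity penalty for short candidates."""
    c, r = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(c, n), ngrams(r, n)
        overlap = sum(min(cnt, ref[g]) for g, cnt in cand.items())
        precisions.append(overlap / max(1, sum(cand.values())))
    if min(precisions) == 0:
        return 0.0  # any empty precision zeroes the geometric mean
    bp = 1.0 if len(c) > len(r) else math.exp(1 - len(r) / max(1, len(c)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

print(round(bleu("the cat sat on the mat", "the cat sat on the mat"), 3))  # → 1.0
```

METEOR and RIBES differ mainly in crediting synonym/stem matches and word-order correlation respectively, which is why papers typically report all three.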
We introduce Argoverse 2 (AV2) - a collection of three datasets for perception and forecasting research in the self-driving domain. The annotated Sensor Dataset contains 1,000 sequences of multimodal data, encompassing high-resolution imagery from seven ring cameras, and two stereo cameras in addition to lidar point clouds, and 6-DOF map-aligned pose. Sequences contain 3D cuboid annotations for 26 object categories, all of which are sufficiently-sampled to support training and evaluation of 3D perception models. The Lidar Dataset contains 20,000 sequences of unlabeled lidar point clouds and map-aligned pose. This dataset is the largest ever collection of lidar sensor data and supports self-supervised learning and the emerging task of point cloud forecasting. Finally, the Motion Forecasting Dataset contains 250,000 scenarios mined for interesting and challenging interactions between the autonomous vehicle and other actors in each local scene. Models are tasked with the prediction of future motion for "scored actors" in each scenario and are provided with track histories that capture object location, heading, velocity, and category. In all three datasets, each scenario contains its own HD Map with 3D lane and crosswalk geometry - sourced from data captured in six distinct cities. We believe these datasets will support new and existing machine learning research problems in ways that existing datasets do not. All datasets are released under the CC BY-NC-SA 4.0 license.